test.for.zero
is useful for comparing test results in a meaningful and documented way.

test.for.zero( xtest, xtrue, tol = 1.0e-8, relative = TRUE, tag = NULL)
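A minimal call, using made-up values rather than anything from the fields test suite, might look like:

  # compare a computed value to its known answer; with the default
  # tolerance and relative error, nothing is printed when the test passes
  test.for.zero( sum(1:10), 55, tag = "simple sum")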
A number of test scripts are provided in the tests subdirectory (the list of individual script names is not reproduced here).
To run the tests, just attach the fields library and source the testing file. In the fields source code these are in a subdirectory "tests". Compare the output to the "XXX.Rout.save" text file. Keeping in mind that no test messages should print if all is well, this is really a formality. The main reason these comparisons are provided is to match the conventions of the R package checking utilities.
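As a sketch, a test script might be run from an R session as follows ("XXX.test.R" is a placeholder, not an actual file name):

  # attach fields and source one of the scripts in the tests subdirectory
  library(fields)
  source("XXX.test.R")
  # compare the printed output against the matching "XXX.Rout.save" file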
test.for.zero
is used to print out the result for each
individual comparison.
Failed tests are potentially very bad and are reported with a
string beginning
"FAILED test value = ... "
If the object test.for.zero.flag exists (it can have any value), all the tests that pass print text beginning,
" PASSED test at tolerance ..."
This strategy means that if all tests succeed and the object test.for.zero.flag
does not exist, then nothing is printed by the test scripts. This option
simplifies the output from running through the tests -- no news is good news.
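To see the PASSED messages, create the flag object before running the comparisons; this is only a sketch of the behavior described above:

  # the value does not matter, only that the object exists
  test.for.zero.flag <- TRUE
  test.for.zero( sum(1:10), 55, tag = "simple sum")
  # prints a line beginning " PASSED test at tolerance ..."
  rm(test.for.zero.flag)   # back to silent-on-success behavior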
FORM OF COMPARISON: The actual test is on the sum of absolute differences:

test value = sum( abs(c(xtest) - c(xtrue)) ) / denom

where denom is either mean( abs(c(xtrue)) ) for relative error, or 1.0 otherwise.
Note the use of "c" here to stack any structure in xtest and xtrue into a vector.
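The comparison can be written out directly; the following is a small illustration of the formula above (not the package source):

  xtest <- matrix(1:6, 2, 3)
  xtrue <- xtest + 1e-10
  # stack both objects into vectors with c() and form the scaled
  # sum of absolute differences
  denom      <- mean( abs(c(xtrue)) )   # use 1.0 when relative = FALSE
  test.value <- sum( abs(c(xtest) - c(xtrue)) ) / denom
  test.value < 1.0e-8                   # TRUE: passes at the default tolerance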